Search for: All records

Editors contains: "Etessami, Kousha"

  1. Etessami, Kousha; Feige, Uriel; Puppis, Gabriele (Ed.)
    In [Saunderson, 2011; Saunderson et al., 2013], Saunderson, Parrilo, and Willsky (SPW) asked the following elegant geometric question: what is the largest m = m(d) such that there is an ellipsoid in ℝ^d that passes through v_1, v_2, …, v_m with high probability when v_1, …, v_m are chosen independently from the standard Gaussian distribution N(0, I_d)? The existence of such an ellipsoid is equivalent to the existence of a positive semidefinite matrix X such that v_i^⊤ X v_i = 1 for every 1 ⩽ i ⩽ m, a natural example of a random semidefinite program. SPW conjectured that m = (1-o(1)) d²/4 with high probability. Very recently, Potechin, Turner, Venkat and Wein [Potechin et al., 2022] and Kane and Diakonikolas [Kane and Diakonikolas, 2022] proved that m ≳ d²/polylog(d) via a certain natural, explicit construction. In this work, we give a substantially tighter analysis of their construction to prove that m ≳ d²/C for an absolute constant C > 0. This resolves one direction of the SPW conjecture up to a constant. Our analysis proceeds via the method of Graphical Matrix Decomposition, which has recently been used to analyze correlated random matrices arising in various areas [Barak et al., 2019; Bafna et al., 2022]. Our key new technical tool is a refined method for proving singular value upper bounds on certain correlated random matrices that are tight up to absolute dimension-independent constants. In contrast, all previous methods that analyze such matrices lose logarithmic factors in the dimension.
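A minimal numerical sketch of the feasibility question stated above: sample m standard Gaussian points in ℝ^d and check, via a small semidefinite program, whether some PSD matrix X satisfies v_i^⊤ X v_i = 1 for all i. The use of cvxpy and the particular choices of d and m are illustrative assumptions; this is not the paper's explicit construction or its analysis.

```python
# Illustrative only: checks ellipsoid-fitting feasibility for random Gaussian
# points by solving the SDP "find PSD X with v_i^T X v_i = 1 for all i".
# cvxpy and the sizes below are assumptions for the demo, not the paper's method.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
d, m = 20, 60            # try m around d^2/4 to probe the conjectured threshold
V = rng.standard_normal((m, d))

X = cp.Variable((d, d), PSD=True)
constraints = [v @ X @ v == 1 for v in V]
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()

print("SDP status:", problem.status)   # "optimal" indicates a fitting ellipsoid exists
```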
  2. Etessami, Kousha; Feige, Uriel; Puppis, Gabriele (Ed.)
    We study the time complexity of the discrete k-center problem and related (exact) geometric set cover problems when k or the size of the cover is small. We obtain a plethora of new results:
    - We give the first subquadratic algorithm for rectilinear discrete 3-center in 2D, running in Õ(n^{3/2}) time.
    - We prove a lower bound of Ω(n^{4/3-δ}) for rectilinear discrete 3-center in 4D, for any constant δ > 0, under a standard hypothesis about triangle detection in sparse graphs.
    - Given n points and n weighted axis-aligned unit squares in 2D, we give the first subquadratic algorithm for finding a minimum-weight cover of the points by 3 unit squares, running in Õ(n^{8/5}) time. We also prove a lower bound of Ω(n^{3/2-δ}) for the same problem in 2D, under the well-known APSP Hypothesis. For arbitrary axis-aligned rectangles in 2D, our upper bound is Õ(n^{7/4}).
    - We prove a lower bound of Ω(n^{2-δ}) for Euclidean discrete 2-center in 13D, under the Hyperclique Hypothesis. This lower bound nearly matches the straightforward upper bound of Õ(n^ω), if the matrix multiplication exponent ω is equal to 2.
    - We similarly prove an Ω(n^{k-δ}) lower bound for Euclidean discrete k-center in O(k) dimensions for any constant k ≥ 3, under the Hyperclique Hypothesis. This lower bound again nearly matches known upper bounds if ω = 2.
    - We also prove an Ω(n^{2-δ}) lower bound for the problem of finding 2 boxes to cover the largest number of points, given n points and n boxes in 12D. This matches the straightforward near-quadratic upper bound.
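For concreteness, here is a tiny brute-force reference implementation of rectilinear (L∞) discrete 3-center, the problem whose subquadratic algorithms and conditional lower bounds are discussed above. The O(n⁴)-style enumeration below only illustrates the problem definition; it is not the paper's algorithm.

```python
# Brute-force rectilinear discrete 3-center: choose 3 centers among the input
# points minimizing the maximum L-infinity distance from any point to its
# nearest chosen center.  Purely a problem-definition reference, O(n^4) time.
from itertools import combinations
import numpy as np

def discrete_3center_linf(points):
    pts = np.asarray(points, dtype=float)
    best_radius, best_centers = float("inf"), None
    for idx in combinations(range(len(pts)), 3):
        centers = pts[list(idx)]
        # L-infinity distance from every point to each of the 3 candidate centers
        dists = np.abs(pts[:, None, :] - centers[None, :, :]).max(axis=2)
        radius = dists.min(axis=1).max()
        if radius < best_radius:
            best_radius, best_centers = radius, idx
    return best_radius, best_centers

rng = np.random.default_rng(1)
pts = rng.random((30, 2))
print(discrete_3center_linf(pts))
```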
  3. Etessami, Kousha; Feige, Uriel; Puppis, Gabriele (Ed.)
    To guarantee all agents are matched in general, the classic Deferred Acceptance algorithm needs complete preference lists. In practice, preference lists are short, yet stable matching still works well. This raises two questions: Why does it work well? Which proposals should agents include in their preference lists? We study these questions in a model, introduced by Lee, with preferences based on correlated cardinal utilities: these utilities are based on common public ratings of each agent together with individual private adjustments. Lee showed that for suitable utility functions, in large markets, with high probability, for most agents, all stable matchings yield similarly valued utilities. By means of a new analysis, we strengthen Lee's result, showing that in large markets, with high probability, for all but the agents with the lowest public ratings, all stable matchings yield similarly valued utilities. We can then deduce that for all but the agents with the lowest public ratings, each agent has an easily identified preference list of length O(log n) that includes all of its stable matches, addressing the second question above. We note that this identification uses an initial communication phase. We extend these results to settings where the two sides have unequal numbers of agents, to many-to-one settings, e.g., employers and workers, and we also show the existence of an epsilon-Bayes-Nash equilibrium in which every agent makes relatively few proposals. These results all rely on a new technique for sidestepping the conditioning between the tentative matching events that occur over the course of a run of the Deferred Acceptance algorithm. We complement these theoretical results with an experimental study.
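Since the result concerns runs of the Deferred Acceptance algorithm with short (truncated) preference lists, a minimal proposer-optimal Gale-Shapley sketch follows for reference. The data format and the possibility of agents remaining unmatched when lists are exhausted are the only assumptions; nothing here reproduces the paper's cardinal-utility model or analysis.

```python
# Minimal Deferred Acceptance (Gale-Shapley) with possibly-truncated preference
# lists: proposers are left unmatched once their list is exhausted.  Reference
# sketch only; the paper's utility model is not modeled here.
def deferred_acceptance(proposer_prefs, receiver_prefs):
    # proposer_prefs[p] = list of receivers in preference order (may be short)
    # receiver_prefs[r] = list of proposers in preference order
    rank = {r: {p: i for i, p in enumerate(prefs)} for r, prefs in receiver_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}
    match_of = {}                      # receiver -> current proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        prefs = proposer_prefs[p]
        if next_choice[p] >= len(prefs):
            continue                   # p's list exhausted: p stays unmatched
        r = prefs[next_choice[p]]
        next_choice[p] += 1
        if p not in rank[r]:
            free.append(p)             # r did not list p: proposal rejected
        elif r not in match_of:
            match_of[r] = p
        elif rank[r][p] < rank[r][match_of[r]]:
            free.append(match_of[r])   # r trades up, old partner becomes free
            match_of[r] = p
        else:
            free.append(p)
    return {p: r for r, p in match_of.items()}

prefs_p = {"a": ["x", "y"], "b": ["x"], "c": ["y", "x"]}
prefs_r = {"x": ["a", "b", "c"], "y": ["c", "a"]}
print(deferred_acceptance(prefs_p, prefs_r))
```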
  4. Etessami, Kousha; Feige, Uriel; Puppis, Gabriele (Ed.)
    A recent breakthrough work of Limaye, Srinivasan and Tavenas [Nutan Limaye et al., 2021] proved superpolynomial lower bounds for low-depth arithmetic circuits via a "hardness escalation" approach: they proved lower bounds for low-depth set-multilinear circuits and then lifted the bounds to low-depth general circuits. In this work, we prove superpolynomial lower bounds for low-depth circuits by bypassing the hardness escalation, i.e., the set-multilinearization, step. As set-multilinearization comes with an exponential blow-up in circuit size, our direct proof opens up the possibility of proving an exponential lower bound for low-depth homogeneous circuits by evading a crucial bottleneck. Our bounds hold for the iterated matrix multiplication and the Nisan-Wigderson design polynomials. We also define a subclass of unrestricted depth homogeneous formulas which we call unique parse tree (UPT) formulas, and prove superpolynomial lower bounds for these. This significantly generalizes the superpolynomial lower bounds for regular formulas [Neeraj Kayal et al., 2014; Hervé Fournier et al., 2015]. 
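The iterated matrix multiplication polynomial, one of the hard polynomial families the bounds above hold for, has a simple evaluation procedure, sketched below for orientation only. Returning the (1,1) entry of a product of d generic w×w matrices is one common convention, and the parameters are illustrative; this has nothing to do with the lower-bound proofs themselves.

```python
# Evaluates the iterated matrix multiplication polynomial IMM_{w,d}: the (1,1)
# entry of a product of d generic w x w matrices (one common convention).  The
# variables of the polynomial are the d*w*w matrix entries; here we simply plug
# in a concrete point.  Orientation only.
import numpy as np

def imm_eval(matrices):
    w = matrices[0].shape[0]
    prod = np.eye(w)
    for M in matrices:
        prod = prod @ M
    return prod[0, 0]

rng = np.random.default_rng(2)
w, d = 3, 4
point = [rng.standard_normal((w, w)) for _ in range(d)]
print(imm_eval(point))
```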
  5. Etessami, Kousha; Feige, Uriel; Puppis, Gabriele (Ed.)
    We present an efficient quantum algorithm for simulating the dynamics of Markovian open quantum systems. The performance of our algorithm is similar to the previous state-of-the-art quantum algorithm, i.e., it scales linearly in evolution time and poly-logarithmically in inverse precision. However, our algorithm is conceptually cleaner, and it only uses simple quantum primitives without compressed encoding. Our approach is based on a novel mathematical treatment of the evolution map, which involves a higher-order series expansion based on Duhamel’s principle and approximating multiple integrals using scaled Gaussian quadrature. Our method easily generalizes to simulating quantum dynamics with time-dependent Lindbladians. Furthermore, our method of approximating multiple integrals using scaled Gaussian quadrature could potentially be used to produce a more efficient approximation of time-ordered integrals, and therefore can simplify existing quantum algorithms for simulating time-dependent Hamiltonians based on a truncated Dyson series. 
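To illustrate the "scaled Gaussian quadrature" ingredient, the sketch below approximates a simple two-fold time-ordered integral ∫_0^t ∫_0^s f(s) g(r) dr ds with Gauss-Legendre nodes rescaled to each integration interval, and compares against the closed form for f(s) = g(s) = e^{-s}. The integrand and node counts are assumptions chosen for the demo; this is not the paper's algorithm or its error analysis.

```python
# Approximates the time-ordered double integral  I(t) = ∫_0^t ∫_0^s f(s) g(r) dr ds
# by Gauss-Legendre quadrature rescaled ("scaled") to [0, t] and then to [0, s]
# for each outer node.  Demo only, with f(s) = g(s) = exp(-s) so the exact
# value is known: I(t) = (1 - e^{-t}) - (1 - e^{-2t}) / 2.
import numpy as np

def gauss_nodes(a, b, k):
    x, w = np.polynomial.legendre.leggauss(k)        # nodes/weights on [-1, 1]
    return 0.5 * (b - a) * x + 0.5 * (a + b), 0.5 * (b - a) * w

def ordered_double_integral(f, g, t, k=8):
    s_nodes, s_weights = gauss_nodes(0.0, t, k)
    total = 0.0
    for s, ws in zip(s_nodes, s_weights):
        r_nodes, r_weights = gauss_nodes(0.0, s, k)  # inner interval scales with s
        total += ws * f(s) * np.sum(r_weights * g(r_nodes))
    return total

t = 1.5
f = g = lambda x: np.exp(-x)
approx = ordered_double_integral(f, g, t)
exact = (1 - np.exp(-t)) - (1 - np.exp(-2 * t)) / 2
print(approx, exact)
```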
  6. Etessami, Kousha; Feige, Uriel; Puppis, Gabriele (Ed.)
    Many streaming algorithms provide only a high-probability relative approximation. These two relaxations, of allowing approximation and randomization, seem necessary - for many streaming problems, both relaxations must be employed simultaneously, to avoid an exponentially larger (and often trivial) space complexity. A common drawback of these randomized approximate algorithms is that independent executions on the same input have different outputs, that depend on their random coins. Pseudo-deterministic algorithms combat this issue, and for every input, they output with high probability the same "canonical" solution. We consider perhaps the most basic problem in data streams, of counting the number of items in a stream of length at most n. Morris’s counter [CACM, 1978] is a randomized approximation algorithm for this problem that uses O(log log n) bits of space, for every fixed approximation factor (greater than 1). Goldwasser, Grossman, Mohanty and Woodruff [ITCS 2020] asked whether pseudo-deterministic approximation algorithms can match this space complexity. Our main result answers their question negatively, and shows that such algorithms must use Ω(√{log n / log log n}) bits of space. Our approach is based on a problem that we call Shift Finding, and may be of independent interest. In this problem, one has query access to a shifted version of a known string F ∈ {0,1}^{3n}, which is guaranteed to start with n zeros and end with n ones, and the goal is to find the unknown shift using a small number of queries. We provide for this problem an algorithm that uses O(√n) queries. It remains open whether poly(log n) queries suffice; if true, then our techniques immediately imply a nearly-tight Ω(log n/log log n) space bound for pseudo-deterministic approximate counting. 
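For reference, the Morris counter discussed above is short enough to state in full. The sketch below implements the basic variant (the tunable-base version that achieves any fixed approximation factor is omitted), and it also makes visible the non-pseudo-deterministic behavior the paper studies: independent runs on the same stream typically report different estimates.

```python
# Basic Morris approximate counter: keep X ≈ log2(count); on each item,
# increment X with probability 2^{-X}; estimate the count as 2^X - 1.
# Only X (not the count) is stored, using O(log log n) bits.
import random

class MorrisCounter:
    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.x = 0
    def update(self):
        if self.rng.random() < 2.0 ** (-self.x):
            self.x += 1
    def estimate(self):
        return 2 ** self.x - 1

n = 100_000
estimates = []
for run in range(5):                      # independent runs disagree: the issue
    c = MorrisCounter(seed=run)           # that pseudo-determinism seeks to avoid
    for _ in range(n):
        c.update()
    estimates.append(c.estimate())
print(estimates)
```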
  7. Etessami, Kousha; Feige, Uriel; Puppis, Gabriele (Ed.)
    This work continues the study of linear error correcting codes against adversarial insertion deletion errors (insdel errors). Previously, the work of Cheng, Guruswami, Haeupler, and Li [Kuan Cheng et al., 2021] showed the existence of asymptotically good linear insdel codes that can correct arbitrarily close to 1 fraction of errors over some constant size alphabet, or achieve rate arbitrarily close to 1/2 even over the binary alphabet. As shown in [Kuan Cheng et al., 2021], these bounds are also the best possible. However, known explicit constructions in [Kuan Cheng et al., 2021], and subsequent improved constructions by Con, Shpilka, and Tamo [Con et al., 2022] all fall short of meeting these bounds. Over any constant size alphabet, they can only achieve rate < 1/8 or correct < 1/4 fraction of errors; over the binary alphabet, they can only achieve rate < 1/1216 or correct < 1/54 fraction of errors. Apparently, previous techniques face inherent barriers to achieving rate better than 1/4 or correcting more than a 1/2 fraction of errors. In this work, we give new constructions of such codes that meet these bounds, namely, asymptotically good linear insdel codes that can correct arbitrarily close to 1 fraction of errors over some constant size alphabet, and binary asymptotically good linear insdel codes that can achieve rate arbitrarily close to 1/2. All our constructions are efficiently encodable and decodable. Our constructions are based on a novel approach of code concatenation, which embeds the index information implicitly into codewords. This significantly differs from previous techniques and may be of independent interest. Finally, we also prove the existence of linear concatenated insdel codes with parameters that match random linear codes, and propose a conjecture about linear insdel codes.
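As a point of reference for what "correcting insdel errors" means operationally, the sketch below decodes by minimizing edit distance to a small explicit codebook. The toy two-word code and the brute-force decoder are illustrative assumptions only and are unrelated to the concatenated constructions of the paper.

```python
# Toy illustration of insdel decoding: given a small codebook, decode a received
# string (possibly corrupted by insertions/deletions) to the codeword at minimum
# edit distance.  Brute force over the codebook; not the paper's construction.
def edit_distance(a, b):
    # Standard Levenshtein dynamic program (insertions, deletions, substitutions).
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[len(b)]

def decode(received, codebook):
    return min(codebook, key=lambda c: edit_distance(received, c))

# Tiny "code": two well-separated binary codewords.
codebook = ["0000011111", "1111100000"]
print(decode("000011111", codebook))    # one deletion  -> decodes to the first word
print(decode("11110100000", codebook))  # one insertion -> decodes to the second word
```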
  8. Etessami, Kousha; Feige, Uriel; Puppis, Gabriele (Ed.)
    Online allocation is a broad class of problems where items arriving online have to be allocated to agents who have a fixed utility/cost for each assigned item, so as to maximize/minimize some objective. This framework captures a broad range of fundamental problems such as the Santa Claus problem (maximizing minimum utility), Nash welfare maximization (maximizing geometric mean of utilities), makespan minimization (minimizing maximum cost), minimization of 𝓁_p-norms, and so on. We focus on divisible items (i.e., fractional allocations) in this paper. Even for divisible items, these problems are characterized by strong super-constant lower bounds in the classical worst-case online model. In this paper, we study online allocations in the learning-augmented setting, i.e., where the algorithm has access to some additional (machine-learned) information about the problem instance. We introduce a general algorithmic framework for learning-augmented online allocation that produces nearly optimal solutions for this broad range of maximization and minimization objectives using only a single learned parameter for every agent. As corollaries of our general framework, we improve prior results of Lattanzi et al. (SODA 2020) and Li and Xian (ICML 2021) for learning-augmented makespan minimization, and obtain the first learning-augmented nearly-optimal algorithms for the other objectives such as Santa Claus, Nash welfare, 𝓁_p-minimization, etc. We also give tight bounds on the resilience of our algorithms to errors in the learned parameters, and study the learnability of these parameters.
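To make the "single learned parameter per agent" idea concrete, the sketch below performs fractional online assignment of jobs to machines in proportion to per-machine weights, in the general spirit of the learned-weights approach to makespan cited above. The proportional rule, the restricted-assignment setting, and the way the weights are used are all assumptions for illustration, not the paper's algorithm or its guarantees.

```python
# Fractional online assignment with one learned weight per machine: each arriving
# job is split among its allowed machines in proportion to their weights.
# A sketch of the general flavor of learned-weight allocation, not the paper's
# algorithm; in practice the weights would come from a learned prediction.
import numpy as np

def proportional_online_allocation(jobs, weights):
    # jobs: list of (size, allowed_machine_indices); weights: learned per-machine weights
    loads = np.zeros(len(weights))
    for size, allowed in jobs:
        w = np.array([weights[i] for i in allowed], dtype=float)
        fractions = w / w.sum()
        for frac, i in zip(fractions, allowed):
            loads[i] += size * frac
    return loads

weights = [1.0, 2.0, 1.0]                    # hypothetical learned parameters
jobs = [(1.0, [0, 1]), (2.0, [1, 2]), (1.0, [0, 2]), (3.0, [0, 1, 2])]
loads = proportional_online_allocation(jobs, weights)
print(loads, "makespan:", loads.max())
```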
  9. Etessami, Kousha; Feige, Uriel; Puppis, Gabriele (Ed.)
    Motivated by recent progress on stochastic matching with few queries, we embark on a systematic study of the sparsification of stochastic packing problems more generally. Specifically, we consider packing problems where elements are independently active with a given probability p, and ask whether one can (non-adaptively) compute a "sparse" set of elements guaranteed to contain an approximately optimal solution to the realized (active) subproblem. We seek structural and algorithmic results of broad applicability to such problems. Our focus is on computing sparse sets containing on the order of d feasible solutions to the packing problem, where d is linear or at most polynomial in 1/p. Crucially, we require d to be independent of the number of elements, or any parameter related to the "size" of the packing problem. We refer to d as the "degree" of the sparsifier, as is consistent with graph theoretic degree in the special case of matching. First, we exhibit a generic sparsifier of degree 1/p based on contention resolution. This sparsifier’s approximation ratio matches the best contention resolution scheme (CRS) for any packing problem for additive objectives, and approximately matches the best monotone CRS for submodular objectives. Second, we embark on outperforming this generic sparsifier for additive optimization over matroids and their intersections, as well as weighted matching. These improved sparsifiers feature different algorithmic and analytic approaches, and have degree linear in 1/p. In the case of a single matroid, our sparsifier tends to the optimal solution. In the case of weighted matching, we combine our contention-resolution-based sparsifier with technical approaches of prior work to improve the state of the art ratio from 0.501 to 0.536. Third, we examine packing problems with submodular objectives. We show that even the simplest such problems do not admit sparsifiers approaching optimality. We then outperform our generic sparsifier for some special cases with submodular objectives. 
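As background for the kind of object being constructed, the sketch below builds a non-adaptive sparsifier for stochastic weighted matching by taking the union of several edge-disjoint maximum-weight matchings, a standard device from the "stochastic matching with few queries" line of work cited above as motivation. The round count of roughly 1/p, the use of networkx, and this particular sparsifier are assumptions; it is not the contention-resolution-based sparsifier of the paper.

```python
# Non-adaptive sparsifier for stochastic weighted matching: take the union of
# roughly 1/p edge-disjoint maximum-weight matchings; afterwards only the sparse
# edge set needs to be queried for activeness.  Illustrative of the "few queries"
# motivation; not the paper's contention-resolution-based construction.
import math
import networkx as nx

def matching_sparsifier(graph, p):
    g = graph.copy()
    sparse_edges = set()
    for _ in range(math.ceil(1.0 / p)):
        matching = nx.max_weight_matching(g)       # set of (u, v) pairs
        if not matching:
            break
        sparse_edges |= {tuple(sorted(e)) for e in matching}
        g.remove_edges_from(matching)              # keep rounds edge-disjoint
    return sparse_edges

g = nx.Graph()
g.add_weighted_edges_from([(0, 1, 3.0), (1, 2, 2.0), (2, 3, 3.0), (0, 3, 1.0), (0, 2, 2.5)])
print(matching_sparsifier(g, p=0.5))
```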
  10. Etessami, Kousha; Feige, Uriel; Puppis, Gabriele (Ed.)
    Many deployments of differential privacy in industry are in the local model, where each party releases its private information via a differentially private randomizer. We study triangle counting in the noninteractive and interactive local model with edge differential privacy (which, intuitively, requires that the outputs of the algorithm on graphs that differ in one edge be indistinguishable). In this model, each party’s local view consists of the adjacency list of one vertex. In the noninteractive model, we prove that additive Ω(n²) error is necessary, where n is the number of nodes. This lower bound is our main technical contribution. It uses a reconstruction attack with a new class of linear queries and a novel mix-and-match strategy of running the local randomizers with different completions of their adjacency lists. It matches the additive error of the algorithm based on Randomized Response, proposed by Imola, Murakami and Chaudhuri (USENIX 2021) and analyzed by Imola, Murakami and Chaudhuri (CCS 2022) for constant ε. We use a different postprocessing of Randomized Response and provide tight bounds on the variance of the resulting algorithm. In the interactive setting, we prove a lower bound of Ω(n^{3/2}) on the additive error. Previously, no hardness results were known for interactive, edge-private algorithms in the local model, except for those that follow trivially from the results for the central model. Our work significantly improves on the state of the art in differentially private graph analysis in the local model.
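The randomized-response baseline whose additive error the lower bound matches is simple to state. The sketch below applies ε-randomized response to each potential edge of the adjacency matrix and then debiases before counting triangles. The particular debiasing (an unbiased per-edge estimate multiplied over triples and summed) is one natural post-processing shown for illustration; it is not necessarily the post-processing analyzed in the paper, and the graph parameters are assumptions for the demo.

```python
# Noninteractive edge-LDP triangle counting via randomized response: each
# potential edge bit is flipped with probability q = 1 / (1 + e^eps), each noisy
# bit is debiased to an unbiased edge estimate, and the triangle count is
# estimated as trace(A_hat^3) / 6.  One natural post-processing, demo only.
import numpy as np

def rr_triangle_estimate(adj, eps, rng):
    n = adj.shape[0]
    q = 1.0 / (1.0 + np.exp(eps))                        # randomized-response flip probability
    upper = np.triu(adj, 1).astype(float)                # true upper-triangle edge bits
    flips = np.triu(rng.random((n, n)) < q, 1)
    noisy = np.where(flips, 1.0 - upper, upper)          # each potential edge flipped w.p. q
    debiased = np.triu((noisy - q) / (1.0 - 2.0 * q), 1) # unbiased per-edge estimates
    a_hat = debiased + debiased.T                        # symmetric, zero diagonal
    return np.trace(a_hat @ a_hat @ a_hat) / 6.0         # unbiased triangle-count estimate

rng = np.random.default_rng(3)
n = 60
upper = np.triu((rng.random((n, n)) < 0.3).astype(int), 1)
adj = upper + upper.T                                    # random undirected graph
true_triangles = np.trace(np.linalg.matrix_power(adj, 3)) // 6
print(true_triangles, rr_triangle_estimate(adj, eps=2.0, rng=rng))
```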